MBBOS-GCN: minimum bounding box over-segmentation—graph convolution 3D point cloud deep learning model

Authors

Abstract

Point cloud data with high accuracy and density is an important source for depicting real ground objects, and deep learning methods for 3D object detection and recognition that consume points directly have broad research prospects. However, many previous models ignored structural information and sampling randomness. To overcome this limitation, we propose an innovative model, the minimum bounding box over-segmentation–graph convolution network (MBBOS-GCN), to enhance structural perception capability and reduce sampling randomness. In MBBOS-GCN, the number of sampled points is used as the scale, and a modified graph convolution collects features from different scales. The point cloud is divided into several small regions by the minimum bounding box over-segmentation algorithm, and the farthest point sampling (FPS) algorithm samples points within each region. Experiments on classification and semantic scene segmentation show that: (1) MBBOS-GCN achieves high accuracy, up to 91.87% and 89.5% on the ModelNet40 and ScanNet datasets, respectively; (2) it provides good stability and robustness, with little change in accuracy and only a slight change in loss value when the input data are altered; (3) it can be adapted to complex scenes, where accuracy reaches 97.53%. This superior performance provides effective support for the construction of digital twin cities, background calibration of multimode satellites, and feature inversion validation.
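The sampling strategy described above (split the cloud into small regions, then run FPS inside each region) can be sketched as follows. This is a minimal illustration only: the uniform axis-aligned grid stands in for the paper's minimum-bounding-box over-segmentation, and all function names and parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Classic FPS: repeatedly pick the point farthest from those already chosen."""
    n = points.shape[0]
    k = min(k, n)
    chosen = np.zeros(k, dtype=int)          # first pick is index 0
    dists = np.full(n, np.inf)               # distance to nearest chosen point
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dists = np.minimum(dists, d)
        chosen[i] = int(np.argmax(dists))
    return points[chosen]

def region_wise_fps(points, splits_per_axis, samples_per_region):
    """Split the cloud's bounding box into a grid of small regions, then
    run FPS inside each non-empty region (a sketch of the MBBOS idea)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    # map each point to an integer grid cell along each axis
    cell = np.floor((points - lo) / (hi - lo + 1e-9) * splits_per_axis).astype(int)
    cell = np.clip(cell, 0, splits_per_axis - 1)
    keys = cell[:, 0] * splits_per_axis**2 + cell[:, 1] * splits_per_axis + cell[:, 2]
    sampled = [farthest_point_sampling(points[keys == key], samples_per_region)
               for key in np.unique(keys)]
    return np.vstack(sampled)

rng = np.random.default_rng(0)
cloud = rng.random((1024, 3))
subset = region_wise_fps(cloud, splits_per_axis=2, samples_per_region=16)
print(subset.shape)  # at most 2**3 * 16 = 128 points survive
```

Sampling per region rather than globally keeps sparse regions represented, which is the intuition behind reducing the randomness of plain FPS on the whole cloud.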


Similar articles

Approximating the Minimum Volume Bounding Box of a Point Set

Isn’t it an artificial, sterilized, didactically pruned world, a mere sham world in which you cravenly vegetate, a world without vices, without passions without hunger, without sap and salt, a world without family, without mothers, without children, almost without women? The instinctual life is tamed by meditation. For generations you have left to others dangerous, daring, and responsible thing...


PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation

We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information. Unlike existing methods that either use multistage pipelines or hold sensor- and dataset-specific assumptions, PointFusion is conceptually simple and application-agnostic. The image data and the raw point cloud data are independently processed by a CNN and a PointNet architecture...


Learning 3D Point Cloud Histograms

In this paper we show how histograms based on the angular relationships between a subset of point normals in a 3D point cloud can be used in a machine learning algorithm to recognize different classes of objects given by their 3D point clouds. This approach extends the work done by Gary Bradski at Willow Garage on point cloud recognition by applying a machine learning approach t...


Chapter 20 Approximating the Minimum Volume Bounding Box of a Point Set


Chapter 19 Approximating the Minimum Volume Bounding Box of a Point Set



Journal

Journal title: Journal of Applied Remote Sensing

Year: 2022

ISSN: 1931-3195

DOI: https://doi.org/10.1117/1.jrs.16.016502